
    Developmental Learning: A Case Study in Understanding “Object Permanence”

    The concepts of muddy environments and muddy tasks set the ground for understanding the essence of intelligence, both artificial and natural, which further motivates the need for Developmental Learning in machines. In this paper, a biologically inspired computational model is proposed to study one of the fundamental and controversial issues in cognitive science: “Object Permanence.” The model is implemented on a robot, which enables us to examine the robot’s behavior as its perception develops through real-time experience. Our experimental results are consistent with prior research on human infants, which not only sheds light on the highly controversial issue of object permanence but also demonstrates how biologically inspired developmental models can help develop intelligent machines and verify computational modeling established in cognitive science.

    Covert Perceptual Capability Development

    In this paper, we propose a model for developing robots’ covert perceptual capability using reinforcement learning. Covert perceptual behavior is treated as an action selected by a motivational system. We apply this model to vision-based navigation, where the goal is to enable a robot to learn road boundary types. Instead of dealing with problems in controlled environments with a low-dimensional state space, we test the model on images captured in non-stationary environments. Incremental Hierarchical Discriminant Regression is used to generate states on the fly; its coarse-to-fine tree structure guarantees real-time retrieval in a high-dimensional state space. A k-nearest-neighbor strategy is adopted to further reduce training time complexity.
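
    As a rough illustration of the two mechanisms the abstract names, the sketch below generates prototype “states” on the fly and labels new inputs by a k-nearest-neighbor vote. It is not the authors’ IHDR implementation (whose coarse-to-fine discriminant tree is considerably more involved); the class name, the spawn threshold, and the flat prototype list are all illustrative assumptions.

```python
import numpy as np

# Toy sketch, NOT the paper's IHDR: states are generated on the fly as
# prototype vectors, and a k-nearest-neighbor vote over stored
# prototypes labels a new image's road boundary type.
class OnTheFlyStates:
    def __init__(self, spawn_dist: float, k: int = 3):
        self.spawn_dist = spawn_dist        # distance that triggers a new state
        self.k = k
        self.protos: list[np.ndarray] = []  # one prototype vector per state
        self.labels: list[int] = []         # road-boundary-type label per state

    def learn(self, x: np.ndarray, label: int) -> None:
        """Create a new state only when x is far from every stored prototype."""
        if not self.protos or min(np.linalg.norm(x - p) for p in self.protos) > self.spawn_dist:
            self.protos.append(x.copy())
            self.labels.append(label)

    def classify(self, x: np.ndarray) -> int:
        """k-NN vote among stored prototypes (assumes learn() was called first)."""
        d = np.array([np.linalg.norm(x - p) for p in self.protos])
        nearest = d.argsort()[: self.k]
        votes = [self.labels[i] for i in nearest]
        return max(set(votes), key=votes.count)
```

    A real IHDR tree would replace the linear scan in classify() with a coarse-to-fine descent down the tree, which is what makes retrieval real-time in a high-dimensional state space.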

    Conjunctive Visual and Auditory Development via Real-Time Dialogue

    Human developmental learning is capable of dealing with the dynamic visual world, speech-based dialogue, and their complex real-time association. However, an architecture that realizes this for robotic cognitive development has not been reported before. This paper takes up that challenge. The proposed architecture does not require a strict coupling between visual and auditory stimuli. Two major operations contribute to the “abstraction” process: multiscale temporal priming and high-dimensional numeric abstraction through internal responses with reduced variance. As a basic principle of developmental learning, the programmer does not know the nature of the world’s events at the time of programming, so a hand-designed, task-specific representation is not possible. We successfully tested the architecture on the SAIL robot under an unprecedentedly challenging multimodal interaction mode: real-time speech dialogue serves as the teaching source for simultaneous and incremental visual learning and language acquisition, while the robot views a dynamic world containing a rotating object to which the dialogue refers.

    Developmental Robots - A New Paradigm

    It has proved extremely challenging for humans to program a robot to a degree sufficient for it to act properly in a typical, unknown human environment. This is especially true for a humanoid robot, owing to the very large number of redundant degrees of freedom and the large number of sensors required for a humanoid to work safely and effectively in a human environment. How can we address this fundamental problem? Motivated by human mental development from infancy to adulthood, we present a theory, an architecture, and experimental results showing how to enable a robot to develop its mind automatically, through online, real-time interaction with its environment. Humans mentally “raise” the robot through “robot sitting” and “robot schools” instead of task-specific robot programming.

    Why Deep Learning's Performance Data Are Misleading

    This is a theoretical paper, a companion to the keynote talk at the AIEE 2023 conference. In contrast to conscious learning, many projects in AI have employed so-called "deep learning," many of which have seemed to give impressive performance. This paper explains that such performance data are deceptively inflated due to two misconducts: "data deletion" and "test on training set." The paper clarifies what "data deletion" and "test on training set" mean in deep learning and why they are misconducts. A simple classification method is defined, called Nearest Neighbor With Threshold (NNWT). A theorem is established that the NNWT method reaches zero error on any validation set and any test set using the two misconducts, as long as the test set is in the possession of the author and both the amount of storage space and the training time are finite but unbounded, as with many deep learning methods. However, many deep learning methods, like the NNWT method, are not generalizable, since they have never been tested on a true test set. Why? The so-called "test set" was used in the Post-Selection step of the training stage. Evidence that the misconducts actually took place in many deep learning projects is beyond the scope of this paper.
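
    To make the NNWT construction concrete, here is a minimal sketch consistent with the abstract's description; the paper's formal definition may differ, and the class and method names are hypothetical. With unbounded storage, memorizing every labeled example in the author's possession, including the supposed test set, yields zero error on exactly those sets.

```python
import numpy as np

# Hedged sketch of Nearest Neighbor With Threshold (NNWT) as described
# in the abstract. With unbounded storage the model simply memorizes
# every (input, label) pair the author possesses, so any query within
# distance t of a memorized input echoes the stored label, and the
# error on the memorized "test" set is zero by construction.
class NNWT:
    def __init__(self, t: float):
        self.t = t                      # matching threshold
        self.xs: list[np.ndarray] = []  # memorized inputs
        self.ys: list[int] = []         # memorized labels

    def memorize(self, x: np.ndarray, y: int) -> None:
        self.xs.append(x.copy())
        self.ys.append(y)

    def predict(self, x: np.ndarray, default: int = -1) -> int:
        if not self.xs:
            return default
        d = np.array([np.linalg.norm(x - s) for s in self.xs])
        i = int(d.argmin())
        return self.ys[i] if d[i] <= self.t else default
```

    Nothing about this procedure generalizes; it merely shows that zero reported error is compatible with zero learning when the test set leaks into training.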

    Deep Learning Misconduct and How Conscious Learning Avoids it

    “Deep learning” uses Post-Selection: selecting a model after training multiple models on data. The performance data of “Deep Learning” have been deceptively inflated by two misconducts: (1) cheating in the absence of a test and (2) hiding bad-looking data. Through the same misconducts, a simple method, Pure-Guess Nearest Neighbor (PGNN), gives zero errors on any validation dataset V, as long as V is in the possession of the authors and both the amount of storage space and the training time are finite but unbounded. The misconducts are fatal because “Deep Learning” is not generalizable: it overfits the sample set V. The charges here apply to all learning modes. This chapter proposes new AI metrics, called developmental errors, for all networks trained under four Learning Conditions: (1) a body including sensors and effectors, (2) an incremental learning architecture (due to the “big data” flaw), (3) a training experience, and (4) a limited amount of computational resources. Developmental Networks avoid the Deep Learning misconduct because they train a sole system, which automatically discovers context rules on the fly by generating emergent Turing machines that are optimal in the sense of maximum likelihood across a lifetime, conditioned on the four Learning Conditions.
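
    The Post-Selection effect itself is easy to reproduce numerically. In the purely illustrative sketch below (the sample size, model count, and random-guess “models” are assumptions, not the chapter’s experiment), each trained model guesses labels at random, yet reporting only the best of many models on the same validation set V yields an error far below the 0.50 any single guesser achieves in expectation.

```python
import numpy as np

# Illustration of Post-Selection: train many models, then report only
# the luckiest one's error on the validation set V that picked it.
rng = np.random.default_rng(0)
n_val, n_models = 50, 10_000
y_val = rng.integers(0, 2, size=n_val)      # binary labels of V

best_err = 1.0
for _ in range(n_models):
    preds = rng.integers(0, 2, size=n_val)  # a "model" that purely guesses
    best_err = min(best_err, float(np.mean(preds != y_val)))

print("expected error of any single guesser: 0.50")
print(f"post-selected 'validation error':    {best_err:.2f}")  # far below 0.50
```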

    Brains as naturally emerging Turing machines

    It has been shown that a Developmental Network (DN) can learn any Finite Automaton (FA) [29], but an FA is not a general-purpose automaton by itself. This theoretical paper shows that the controller of any Turing Machine (TM) is equivalent to an FA. It further models a motivation-free brain (excluding motivation, e.g., emotions) as a TM inside a grounded DN, i.e., a DN coupled with the real world. Unlike a traditional TM, the TM-in-DN uses natural encodings of input and output and uses emergent internal representations. In Artificial Intelligence (AI) there are two major schools, symbolism and connectionism. The theoretical result here implies that the connectionist school is at least as powerful as the symbolic school in terms of the general-purpose nature of the TM as well. Furthermore, any TM simulated by the DN is grounded and uses natural encoding, so the DN autonomously learns any TM directly from the natural world without a human needing to encode its input and output. This opens the door for a DN to fully autonomously learn any TM from a human teacher, from reading a book, or from real-world events. The motivated version of the DN [31] further enables a DN to go beyond action-supervised learning, so as to learn based on pain avoidance, pleasure seeking, and novelty seeking [31].
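
    The observation that a TM’s controller is finite can be seen in miniature below. The toy unary-increment machine is an illustration of the standard TM definition, not an example from the paper: its controller is nothing more than a finite lookup table from (control state, tape symbol) to (next state, written symbol, head move), which is exactly the kind of finite input-output mapping an FA, and by [29] a DN, can realize.

```python
# Controller of a toy Turing machine that appends a 1 to a unary
# numeral: a finite map, hence FA-equivalent.
delta = {
    ("scan", "1"): ("scan", "1", +1),  # move right over existing 1s
    ("scan", "_"): ("halt", "1", 0),   # write a trailing 1, then halt
}

def run(tape: list[str]) -> list[str]:
    state, head = "scan", 0
    while state != "halt":
        sym = tape[head] if head < len(tape) else "_"
        state, write, move = delta[(state, sym)]
        if head == len(tape):
            tape.append(write)
        else:
            tape[head] = write
        head += move
    return tape

print(run(list("111_")))  # ['1', '1', '1', '1']  (3 + 1 in unary)
```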